There is a widespread belief that the tone of US political language has recently become more negative, in particular since Donald Trump entered politics. At the same time, there is disagreement over whether Trump changed the trend or merely continued it. To date, data-driven evidence on these questions is scarce, partly because comprehensive, longitudinal records of politicians' utterances are hard to obtain. Here, we apply psycholinguistic tools to a novel, comprehensive corpus of 24 million quotes from the news, attributed to 18,627 US politicians, in order to analyze how the tone of US politicians' language evolved between 2008 and 2020. We show that the frequency of negative emotion words declined steadily during Obama's tenure and then rose suddenly and persistently with the 2016 primary campaigns, by an amount equal to 8% of the pre-campaign mean, in a pattern that emerged across parties. The effect size drops by 40% when Trump's quotes are omitted, and by 50% when averaging over speakers rather than over quotes, implying that prominent speakers, and Trump in particular, contributed disproportionately, though not exclusively, to the rise in negative language. This work provides the first large-scale, data-driven evidence that the start of Trump's campaign acted as a catalyst for a shift toward a more negative political tone, with important implications for the debate about the state of US politics.
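The negativity measure described above can be sketched as a per-quote lexicon count, aggregated per year. The paper uses psycholinguistic tools (e.g., LIWC-style dictionaries); the tiny lexicon below is a hypothetical stand-in, not the actual word list.

```python
# Sketch, assuming a toy negative-emotion lexicon: the negativity of a quote
# is the fraction of its tokens that match the lexicon.
import re

NEGATIVE_EMOTION = {"angry", "hate", "terrible", "fear", "sad", "worst"}

def negativity(quote: str) -> float:
    """Fraction of tokens in `quote` that are negative-emotion words."""
    tokens = re.findall(r"[a-z']+", quote.lower())
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_EMOTION for t in tokens) / len(tokens)

def yearly_average(quotes_by_year: dict) -> dict:
    """Quote-level average negativity per year. Speaker-level averaging,
    as in the robustness check, would aggregate per speaker first."""
    return {
        year: sum(map(negativity, quotes)) / len(quotes)
        for year, quotes in quotes_by_year.items()
    }

print(negativity("This is the worst, terrible deal"))  # 2 of 6 tokens
```

Averaging over quotes versus over speakers, as the abstract notes, changes the weight given to prolific speakers, which is why the two aggregations yield different effect sizes.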
Attributed quotes are arguably the most direct and least filtered pathway of information propagation in the news. Consequently, quotes play a central role in the conception, reception, and analysis of news stories. Since quotes provide a more direct window into a speaker's mind than regular reporting, they are a valuable resource for journalists and researchers alike. While substantial research effort has been devoted to methods for automatically extracting quotes from news and attributing them to speakers, few comprehensive corpora of attributed quotes from contemporary sources are available to the public. Here, we present an adaptive web interface for searching Quotebank, a massive collection of quotes from the news, which we make available at https://quotebank.dlab.tools.
Named entity linking (NEL) in news is a challenging endeavor due to the frequency of unseen and emerging entities, which necessitates the use of unsupervised or zero-shot methods. However, such methods tend to come with caveats, such as not integrating a suitable knowledge base (e.g., Wikidata) for emerging entities, lack of scalability, and poor interpretability. Here, we consider person disambiguation in Quotebank, a massive corpus of speaker-attributed quotations from the news, and investigate the suitability of intuitive, lightweight, and scalable heuristics for NEL in web-scale corpora. Our best-performing heuristic disambiguates 94% and 63% of the mentions on Quotebank and the AIDA-CoNLL benchmark, respectively. Additionally, the proposed heuristics compare favorably to the state-of-the-art unsupervised and zero-shot methods, Eigenthemes and mGENRE, thereby serving as strong baselines for unsupervised and zero-shot entity linking.
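A toy illustration of the kind of lightweight heuristic evaluated above: link an ambiguous person mention to its most "prominent" candidate entity, using Wikipedia sitelink counts as a crude notability proxy. This is a generic popularity baseline for illustration, not necessarily the paper's best-performing heuristic; the candidate data is made up.

```python
# Minimal sketch of a popularity-based disambiguation heuristic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    qid: str        # Wikidata identifier of the candidate entity
    sitelinks: int  # number of Wikipedia sitelinks (notability proxy)

def link_by_popularity(candidates: list) -> Optional[str]:
    """Return the QID of the most popular candidate, or None if there are none."""
    if not candidates:
        return None
    return max(candidates, key=lambda c: c.sitelinks).qid

# Hypothetical candidates for the mention "Obama":
cands = [Candidate("Q76", 300), Candidate("Q12345", 4)]
print(link_by_popularity(cands))  # Q76
```

Such heuristics scale trivially to web-scale corpora because each mention is resolved with a single pass over its candidate list, with no training required.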
In real-world applications of influence maximization (IM), the network structure is often unknown. Thus, we may need to identify the most influential seed nodes by exploring only a part of the underlying network, given a small budget for node queries. Since collecting node metadata is more cost-effective than investigating the relationships between nodes via queried nodes, we propose IM-META, an end-to-end solution to IM in networks with unknown topology that retrieves information from both queries and node metadata. However, using such metadata to aid the IM process is not without risk, due to the noisy nature of metadata and the uncertainties of connectivity inference. To tackle these challenges, we formulate a new IM problem that aims to find both seed nodes and queried nodes. In IM-META, we develop an effective method that iteratively performs three steps: 1) we learn the relationship between the collected metadata and edges via a Siamese neural network model, 2) we select a number of confidently inferred edges to construct a reinforced graph, and 3) we identify the next node to query by maximizing the inferred influence spread using our topology-aware ranking strategy. By querying only 5% of the nodes, IM-META achieves 93% of the upper-bound performance.
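The three-step loop above can be sketched as follows. This is a heavily simplified skeleton: the Siamese network is replaced by a cosine-similarity stand-in on metadata vectors, and the topology-aware ranking by plain degree on the reinforced graph; both substitutions are illustrative, not the paper's components.

```python
# Sketch of the iterative query loop, assuming metadata is given as
# {node_id: feature_vector}. Not the actual IM-META implementation.
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def im_meta_sketch(metadata, budget, threshold=0.9):
    queried, edges = set(), set()
    nodes = list(metadata)
    for _ in range(budget):
        # Steps 1-2: infer confident edges from metadata similarity
        # (the paper learns this relation with a Siamese network).
        for i in nodes:
            for j in nodes:
                if i < j and cosine(metadata[i], metadata[j]) >= threshold:
                    edges.add((i, j))
        # Step 3: query the unqueried node ranked highest on the
        # reinforced graph (here: by inferred degree).
        degree = {n: sum(n in e for e in edges) for n in nodes}
        remaining = [n for n in nodes if n not in queried]
        if not remaining:
            break
        queried.add(max(remaining, key=lambda n: degree[n]))
    return queried, edges

meta = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.0, 1.0]}
q, e = im_meta_sketch(meta, budget=2)
print(sorted(q), sorted(e))  # [0, 1] [(0, 1)]
```

In the real method, each new query reveals true edges and metadata that refine the learned edge model, so inference and querying reinforce each other across iterations.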
Coronary Computed Tomography Angiography (CCTA) provides information on the presence, extent, and severity of obstructive coronary artery disease. Large-scale clinical studies analyzing CCTA-derived metrics typically require ground-truth validation in the form of high-fidelity 3D intravascular imaging. However, manual rigid alignment of intravascular images to corresponding CCTA images is both time consuming and user-dependent. Moreover, intravascular modalities suffer from several non-rigid motion-induced distortions arising from the imaging catheter path. To address these issues, we here present a semi-automatic segmentation-based framework for both rigid and non-rigid matching of intravascular images to CCTA images. We formulate the problem in terms of finding the optimal \emph{virtual catheter path} that samples the CCTA data to recapitulate the coronary artery morphology found in the intravascular image. We validate our co-registration framework on a cohort of $n=40$ patients using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our results indicate that our non-rigid registration significantly outperforms other co-registration approaches for luminal bifurcation alignment in both longitudinal (mean mismatch: 3.3 frames) and rotational directions (mean mismatch: 28.6 degrees). By providing a differentiable framework for automatic multi-modal intravascular data fusion, our developed co-registration modules significantly reduce the manual effort required to conduct large-scale multi-modal clinical studies while also providing a solid foundation for the development of machine learning-based co-registration approaches.
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
Artificial Intelligence (AI) has become commonplace for solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
The future of population-based breast cancer screening likely lies in personalized strategies based on clinically relevant risk models. Mammography-based risk models should remain robust to domain shifts caused by different populations and mammographic devices. Modern risk models do not ensure adaptation across vendor-domains and often unintentionally rely on both precursors of cancer and systemic/global mammographic information associated with short- and long-term risk, respectively, which might limit performance. We developed a robust, cross-vendor model for long-term risk assessment. An augmentation-based domain adaptation technique, based on flavorization of mammographic views, ensured generalization to an unseen vendor-domain. We trained on samples without diagnosed/potential malignant findings to learn systemic/global breast tissue features, called mammographic texture, indicative of future breast cancer. However, training in this way may cause erratic convergence. By excluding noise-inducing samples and designing a case-control dataset, a robust ensemble texture model was trained. This model was validated in two independent datasets. In 66,607 Danish women with flavorized Siemens views, the AUC was 0.71 and 0.65 for prediction of interval cancers within two years (ICs) and of cancers from two years after screening (LTCs), respectively. In combination with established risk factors, the model's AUC increased to 0.68 for LTCs. In 25,706 Dutch women with Hologic-processed views, the AUCs were not different from the AUCs in Danish women with flavorized views. The results suggest that the model robustly estimates long-term risk while adapting to an unseen processed vendor-domain. The model identified 8.1% of Danish women accounting for 20.9% of ICs and 14.2% of LTCs.
Quaternion-valued neural networks have experienced rising popularity and interest from researchers in recent years, whereby the derivatives with respect to quaternions needed for optimization are typically calculated as the sum of the partial derivatives with respect to the real and imaginary parts. However, we can show that the product and chain rules do not hold with this approach. We solve this by employing the GHR calculus and derive quaternion backpropagation based on it. Furthermore, we experimentally demonstrate the functionality of the derived quaternion backpropagation.
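The root of the difficulty discussed above is that quaternion multiplication is non-commutative, which is why real-valued product and chain rules cannot simply be applied component-wise. A minimal demonstration of the non-commutativity (not the GHR calculus itself):

```python
# Hamilton product of quaternions represented as (w, x, y, z) tuples,
# demonstrating i*j = k but j*i = -k.

def qmul(p, q):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
print(qmul(i, j))  # (0.0, 0.0, 0.0, 1.0)  -> i*j = k
print(qmul(j, i))  # (0.0, 0.0, 0.0, -1.0) -> j*i = -k
```

Because left- and right-multiplication differ, a well-defined quaternion derivative must track how factors act from each side, which is precisely what the GHR calculus formalizes.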
In this work, a method for obtaining pixel-wise error bounds in Bayesian regularization of inverse imaging problems is introduced. The proposed method employs estimates of the posterior variance together with techniques from conformal prediction in order to obtain coverage guarantees for the error bounds, without making any assumption on the underlying data distribution. It is generally applicable to Bayesian regularization approaches, independent, e.g., of the concrete choice of the prior. Furthermore, the coverage guarantees can also be obtained in case only approximate sampling from the posterior is possible. In particular, this allows the proposed framework to incorporate any learned prior in a black-box manner. Guaranteed coverage without assumptions on the underlying distributions is only achievable because the magnitude of the error bounds is, in general, unknown in advance. Nevertheless, experiments with multiple regularization approaches presented in the paper confirm that, in practice, the obtained error bounds are rather tight. To realize the numerical experiments, this work also introduces a novel primal-dual Langevin algorithm for sampling from non-smooth distributions.
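A generic split-conformal sketch in the spirit of the framework above: use the posterior standard deviation as a per-pixel scale, calibrate a quantile of normalized residuals on held-out data, and emit error bounds with marginal coverage at least 1 - alpha. This is the standard split-conformal recipe, not the paper's exact construction, and the toy numbers are illustrative.

```python
# Sketch: conformalized error bounds from posterior standard deviations.
import math

def conformal_quantile(residuals, stds, alpha):
    """(1 - alpha)-quantile (with finite-sample correction) of |r|/std scores
    computed on a held-out calibration set."""
    scores = sorted(abs(r) / s for r, s in zip(residuals, stds))
    n = len(scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return scores[k]

def error_bound(std, q):
    """Pixel-wise error bound: calibrated quantile times posterior std."""
    return q * std

# Hypothetical calibration residuals and posterior stds:
cal_residuals = [0.5, -1.0, 0.2, 0.8, -0.3, 1.2, 0.1, -0.6]
cal_stds      = [1.0,  1.0, 0.5, 1.0,  0.5, 1.0, 0.2, 1.0]
q = conformal_quantile(cal_residuals, cal_stds, alpha=0.2)
print(q, error_bound(2.0, q))  # 1.2 2.4
```

The coverage guarantee holds regardless of how the stds were produced, which is what allows a learned prior, or an approximate posterior sampler, to be used as a black box.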